# Instruction optimization
## Phi Mini MoE Instruct GGUF
License: MIT · Tags: Large Language Model, English · Author: gabriellarson · Downloads: 2,458 · Likes: 1

Phi-mini-MoE is a lightweight Mixture of Experts (MoE) model aimed at English-language business and research use, and it performs well in resource-constrained, low-latency deployments.
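Because this is a GGUF release, it can be run locally through llama.cpp bindings. A minimal sketch using the `llama-cpp-python` package; the file name, context size, and sampling settings below are assumptions, so substitute the quantization you actually downloaded:

```python
from llama_cpp import Llama

# Load a local GGUF file; the path and quantization suffix are
# placeholders -- use whichever quantized file you downloaded.
llm = Llama(
    model_path="./Phi-mini-MoE-instruct-Q4_K_M.gguf",
    n_ctx=4096,   # context window
    n_threads=8,  # CPU threads for inference
)

out = llm(
    "Summarize the key trade-offs of Mixture of Experts models.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```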
## Qwen3 Embedding 4B GGUF
License: Apache-2.0 · Tags: Text Embedding · Author: Mungert · Downloads: 723 · Likes: 1

Qwen3-Embedding-4B is a text embedding model built on the Qwen3 series, designed specifically for embedding and ranking tasks, with strong results in multilingual text processing and code retrieval.
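For embedding GGUF builds, llama.cpp can be switched into embedding mode. A minimal sketch with `llama-cpp-python`, assuming a locally downloaded file (the name is a placeholder); ranking is illustrated with plain cosine similarity:

```python
import math
from llama_cpp import Llama

# embedding=True puts llama.cpp in embedding mode; the file name is a
# placeholder. Depending on the build, embed() may return a pooled vector
# or per-token vectors -- check the output shape for your file.
model = Llama(model_path="./Qwen3-Embedding-4B-Q8_0.gguf", embedding=True)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

query = "How do I sort a list in Python?"
doc = "def quicksort(xs): ..."
q_vec, d_vec = model.embed(query), model.embed(doc)
print(cosine(q_vec, d_vec))
```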
## Qwen3 0.6B GGUF
License: Apache-2.0 · Tags: Large Language Model, English · Author: prithivMLmods · Downloads: 290 · Likes: 1

Qwen3 is the latest generation of the Tongyi Qianwen family of large language models, offering both dense and Mixture of Experts (MoE) variants. Built on large-scale training, Qwen3 makes significant advances in reasoning, instruction following, agent capabilities, and multilingual support.
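A chat-style call against a small GGUF build like this one can go through `create_chat_completion`, which applies the model's built-in chat template. A sketch, again with a placeholder file name:

```python
from llama_cpp import Llama

# File name is a placeholder for whichever quantization you downloaded.
llm = Llama(model_path="./Qwen3-0.6B-Q4_K_M.gguf", n_ctx=8192)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain dense vs. MoE models in two sentences."},
    ],
    max_tokens=200,
)
print(resp["choices"][0]["message"]["content"])
```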
## Esotericknowledge 24B
Tags: Large Language Model, Transformers · Author: yamatazen · Downloads: 122 · Likes: 4

A 24B-parameter merged language model that uses the TIES method to fuse several 24B-scale pre-trained models, focused on high-quality text generation and comprehension.
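TIES merging, in outline, builds a task vector per fine-tuned model, trims it to its largest-magnitude entries, elects a per-parameter sign, and averages only the values that agree with that sign. A toy NumPy sketch of that outline, not the exact recipe used for this model:

```python
import numpy as np

def ties_merge(base, finetuned, density=0.2):
    """Toy TIES merge: trim task vectors, elect a sign, average agreeing values."""
    # 1. Task vectors: what each fine-tune changed relative to the base.
    tvs = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for tv in tvs:
        k = max(1, int(density * tv.size))
        thresh = np.sort(np.abs(tv).ravel())[-k]
        trimmed.append(np.where(np.abs(tv) >= thresh, tv, 0.0))

    # 3. Elect sign: the dominant sign per parameter across models.
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))

    # 4. Merge: average only entries whose sign agrees with the elected one.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_tv = (stacked * agree).sum(axis=0) / counts

    return base + merged_tv

base = np.zeros((4, 4))
fts = [base + np.random.randn(4, 4) * 0.1 for _ in range(3)]
print(ties_merge(base, fts))
```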
## Qwen2.5 7B YOYO Super
License: Apache-2.0 · Tags: Large Language Model, Transformers, Multilingual · Author: YOYO-AI · Downloads: 17 · Likes: 3

Qwen2.5-7B-YOYO-super is an open-source large language model produced by merging base and fine-tuned models, with a focus on stronger instruction following, mathematics, and coding.
## Qwen2.5 1.5B Instruct GGUF
Tags: Large Language Model · Author: MaziyarPanahi · Downloads: 183.11k · Likes: 6

A GGUF-format build of the Qwen2.5-1.5B-Instruct model, suitable for text generation tasks.
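GGUF files like this one are typically fetched straight from the Hub and loaded locally. A sketch using `huggingface_hub` plus `llama-cpp-python`; the repo id and file name are assumptions, so check the model page for the exact quantization on offer:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo id and filename are assumptions -- confirm them on the model page.
path = hf_hub_download(
    repo_id="MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF",
    filename="Qwen2.5-1.5B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
print(llm("Write a haiku about small models.", max_tokens=64)["choices"][0]["text"])
```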
## Llama 3.1 Storm 8B
Tags: Large Language Model, Transformers, Multilingual · Author: akjindal53244 · Downloads: 22.93k · Likes: 176

Llama-3.1-Storm-8B builds on Llama-3.1-8B-Instruct and aims to improve conversation and function-calling ability at the 8-billion-parameter scale.
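For function calling with a Llama-3.1-based model, the `transformers` chat template can render tool schemas into the prompt. A sketch, assuming the repo id matches the author's listing and that the model's template accepts tools the way stock Llama 3.1 does; `get_weather` is a hypothetical tool:

```python
from transformers import AutoTokenizer

# Repo id is an assumption based on the listed author.
tok = AutoTokenizer.from_pretrained("akjindal53244/Llama-3.1-Storm-8B")

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city.
    """
    ...  # hypothetical tool; transformers derives a schema from the docstring

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # inspect the tool-call prompt the template produces
```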
## Badger Lambda Llama 3 8b
Tags: Large Language Model, Transformers · Author: maldv · Downloads: 24 · Likes: 11

Badger is a Llama3 8B instruction model produced by a recursive, maximally pairwise-disjoint, normalized denoising Fourier interpolation of weights, folding in traits from multiple high-quality models.
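The merge method named above is the author's own; as a rough intuition, interpolating in the frequency domain lets you blend two weight tensors while damping high-frequency components, which is one reading of "denoising". A toy NumPy illustration of those two ingredients, not the model's actual procedure:

```python
import numpy as np

def fourier_blend(w_a, w_b, alpha=0.5, keep=0.5):
    """Toy frequency-domain blend of two weight tensors of the same shape."""
    fa = np.fft.rfft(w_a.ravel())
    fb = np.fft.rfft(w_b.ravel())
    mixed = (1 - alpha) * fa + alpha * fb
    # "Denoise" by zeroing the highest-frequency coefficients.
    cutoff = int(keep * mixed.size)
    mixed[cutoff:] = 0
    return np.fft.irfft(mixed, n=w_a.size).reshape(w_a.shape)

a = np.random.randn(8, 8)
b = np.random.randn(8, 8)
print(fourier_blend(a, b).shape)  # (8, 8)
```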
## Merge Mayhem L3 V2.1
Tags: Large Language Model, Transformers · Author: saishf · Downloads: 19 · Likes: 1

A collection of pre-trained language models merged with the mergekit tool, based on the Llama-3-8B architecture and several of its derivatives.
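A typical mergekit run is driven by a YAML config that names the source models and a merge method. A minimal sketch written out from Python; the model names, method, and weights here are placeholders, not this model's actual recipe:

```python
# Placeholder mergekit config; run with the mergekit CLI:
#   mergekit-yaml merge-config.yml ./merged-model
config = """\
models:
  - model: meta-llama/Meta-Llama-3-8B
    parameters:
      weight: 0.5
  - model: some-org/llama-3-8b-derivative   # placeholder name
    parameters:
      weight: 0.5
merge_method: linear
dtype: bfloat16
"""

with open("merge-config.yml", "w") as f:
    f.write(config)
```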
## Noro Hermes 3x7B
License: Apache-2.0 · Tags: Large Language Model, Transformers · Author: ThomasComics · Downloads: 16 · Likes: 1

Noro-Hermes-3x7B is a Mixture of Experts (MoE) model built with the LazyMergekit toolkit, combining three 7B-parameter Mistral variants to cover intelligent assistance, creative role-play, and general tasks.
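A MoE assembled this way routes each input to a small number of expert networks and mixes their outputs by gate weight. A toy NumPy sketch of top-k routing, purely illustrative of the mechanism:

```python
import numpy as np

def moe_forward(x, experts, router_w, top_k=2):
    """Toy Mixture of Experts layer: route an input to its top-k experts."""
    logits = x @ router_w              # one router score per expert
    top = np.argsort(logits)[-top_k:]  # indices of the best-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()               # softmax over the selected experts
    # Weighted sum of the chosen experts' outputs.
    return sum(g * experts[i](x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d = 16
experts = [lambda x, W=rng.standard_normal((d, d)): x @ W for _ in range(3)]
router_w = rng.standard_normal((d, 3))
x = rng.standard_normal(d)
print(moe_forward(x, experts, router_w).shape)  # (16,)
```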
## 13B Thorns L2
Tags: Large Language Model, Transformers, Other · Author: CalderaAI · Downloads: 386 · Likes: 16

13B-Thorns is an instruction-tuned merge built on LLaMAv2-13B that uses the Alpaca prompt format, combining the strengths of several models into one capable language model.
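The Alpaca format referenced here is the well-known instruction/response prompt layout. A minimal sketch of the no-input variant:

```python
# Standard Alpaca prompt template (no-input variant).
ALPACA_TEMPLATE = """Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

prompt = ALPACA_TEMPLATE.format(
    instruction="List three uses of a 13B merged model."
)
print(prompt)
```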